
    Techniques and Apparatuses for Variable-Display Devices to Capture Screen-Fitting Images with a Maximized Field of View

    This publication describes techniques and apparatuses, implemented on variable-display devices (e.g., foldable devices, laptops, multi-display devices), for capturing screen-fitting still images or video streams with a maximized photographic field of view, regardless of display size and device orientation. In aspects, a variable-display device utilizing a square image sensor with a diagonal greater than or equal in size to the diameter of an image circle (e.g., the cross-section of the scenic light focused on an image sensor by means of one or more lenses) enables the device to crop to the usual photographic and video aspect ratios (e.g., 4:3, 16:9).
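
    As a rough illustration of the geometry involved (my own sketch, not from the publication): the largest rectangle of a given aspect ratio inscribed in the image circle has a diagonal equal to the circle's diameter, so a sensor that covers such rectangles in any orientation can crop to any common ratio without sacrificing field of view. The function name and the 10 mm example diameter below are hypothetical.

```python
import math

def max_crop_in_image_circle(circle_diameter_mm: float, aspect_w: int, aspect_h: int):
    """Largest w:h rectangle inscribed in an image circle.

    The rectangle's corners lie on the circle, so its diagonal equals the
    circle diameter D:
        width  = D * w / sqrt(w^2 + h^2)
        height = D * h / sqrt(w^2 + h^2)
    """
    diag = math.hypot(aspect_w, aspect_h)
    width = circle_diameter_mm * aspect_w / diag
    height = circle_diameter_mm * aspect_h / diag
    return width, height

# Example: a hypothetical 10 mm image circle cropped to common ratios.
for w, h in [(4, 3), (16, 9), (1, 1)]:
    cw, ch = max_crop_in_image_circle(10.0, w, h)
    print(f"{w}:{h} crop: {cw:.2f} mm x {ch:.2f} mm")
```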

    The Lutonium: A Sub-Nanojoule Asynchronous 8051 Microcontroller

    We describe the Lutonium, an asynchronous 8051 microcontroller designed for low Et² (energy times the square of the execution time). In 0.18 µm CMOS at a nominal 1.8 V, we expect a performance of 0.5 nJ per instruction at 200 MIPS. At 0.5 V, we expect 4 MIPS and 40 pJ/instruction, corresponding to 25,000 MIPS/Watt. We describe the structure of a fine-grain pipeline optimized for Et² efficiency, the implementation of some of the peripherals, and the advantages of an asynchronous implementation of a deep-sleep mechanism.
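
    A quick sanity check of the quoted efficiency figures (my own arithmetic, not from the paper): power equals energy per instruction times instructions per second, so at a fixed energy per instruction, MIPS/Watt depends only on that energy.

```python
def mips_per_watt(energy_per_instr_j: float) -> float:
    # MIPS/Watt = (instructions/s / 1e6) / (energy_per_instr * instructions/s)
    # The instruction rate cancels, leaving 1 / (energy_per_instr * 1e6).
    return 1.0 / (energy_per_instr_j * 1e6)

print(mips_per_watt(40e-12))   # 40 pJ/instruction -> 25000.0 MIPS/Watt (0.5 V point)
print(mips_per_watt(0.5e-9))   # 0.5 nJ/instruction -> 2000.0 MIPS/Watt (1.8 V point)
```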

    Veiling glare in high dynamic range imaging

    The ability of a camera to record a high dynamic range image, whether by taking one snapshot or a sequence, is limited by the presence of veiling glare: the tendency of bright objects in the scene to reduce the contrast everywhere within the field of view. Veiling glare is a global illumination effect that arises from multiple scattering of light inside the camera's body and lens optics. By measuring separately the direct and indirect components of the intra-camera light transport, one can increase the maximum dynamic range a particular camera is capable of recording. In this paper, we quantify the presence of veiling glare and related optical artifacts for several types of digital cameras, and we describe two methods for removing them: deconvolution by a measured glare spread function, and a novel direct-indirect separation of the lens transport using a structured occlusion mask. In the second method, we selectively block the light that contributes to veiling glare, thereby attaining significantly higher signal-to-noise ratios than with deconvolution. Finally, we demonstrate our separation method for several combinations of cameras and realistic scenes.
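
    A minimal sketch of the flavor of the first method, assuming a spatially invariant, pre-measured glare spread function (GSF) and standard Wiener deconvolution; the paper's actual pipeline, GSF measurement, and spatially varying effects are not reproduced here.

```python
import numpy as np

def remove_glare_wiener(image: np.ndarray, gsf: np.ndarray, snr: float = 1e3) -> np.ndarray:
    """Deconvolve a measured glare spread function out of an HDR image.

    Assumes glare acts as convolution with a spatially invariant GSF,
    given as an array of the same shape as `image` with its peak centered
    (a simplification; real glare varies across the field of view).
    """
    H = np.fft.fft2(np.fft.ifftshift(gsf))
    I = np.fft.fft2(image)
    # Wiener filter: H* / (|H|^2 + 1/SNR) regularizes division where |H| is small.
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(I * W))
```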

    Symmetric Photography: Exploiting Data-sparseness in Reflectance Fields

    Figure 1: The reflectance field of a glass full of gummy bears is captured using two coaxial projector/camera pairs placed 120° apart. (a) is the result of relighting the scene from the front projector, which is coaxial with the presented view, where the (synthetic) illumination consists of the letters "EGSR". Note that due to their sub-surface scattering property, even a single beam of light that falls on a gummy bear illuminates it completely, although unevenly. In (b) we simulate homogeneous backlighting from the second projector combined with the illumination used in (a). For validation, a ground-truth image (c) was captured by loading the same projector patterns into the real projectors. Our approach is able to faithfully capture and reconstruct the complex light transport in this scene. (d) shows a typical frame captured during the acquisition process, with the corresponding projector pattern in the inset.

    We present a novel technique called symmetric photography to capture real-world reflectance fields. The technique models the 8D reflectance field as a transport matrix between the 4D incident light field and the 4D exitant light field. Acquiring this transport matrix is a challenging task due to its large size. Fortunately, the transport matrix is symmetric and often data-sparse. Symmetry enables us to measure the light transport from two sides simultaneously, from the illumination directions and the view directions. Data-sparseness refers to the fact that sub-blocks of the matrix can be well approximated using low-rank representations. We introduce the use of hierarchical tensors as the underlying data structure to capture this data-sparseness, specifically through local rank-1 factorization.
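
    A toy sketch of the two properties the abstract leans on (illustrative only; the paper's hierarchical-tensor scheme is not reproduced): symmetry means one transport matrix serves both relighting directions, and data-sparseness means sub-blocks compress well with rank-1 (outer-product) factors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy symmetric, data-sparse transport matrix: a smooth rank-1 core plus noise.
n = 64
u = np.linspace(0.0, 1.0, n)
noise = 0.01 * rng.random((n, n))
T = np.outer(u, u) + noise + noise.T     # symmetric by construction: T == T.T

# Symmetry: relighting from either side uses the same matrix.
light = rng.random(n)
assert np.allclose(T @ light, T.T @ light)

# Data-sparseness: a sub-block is well approximated by its best rank-1 factor.
block = T[:32, 32:]
U, s, Vt = np.linalg.svd(block, full_matrices=False)
rank1 = s[0] * np.outer(U[:, 0], Vt[0])  # local rank-1 factorization
err = np.linalg.norm(block - rank1) / np.linalg.norm(block)
print(f"relative error of rank-1 block approximation: {err:.3f}")
```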

    High performance imaging using large camera arrays

    Figure 1: Different configurations of our camera array. (a) Tightly packed cameras with telephoto lenses and splayed fields of view; this arrangement is used for high-resolution imaging (section 4.1). (b) Tightly packed cameras with wide-angle lenses, aimed to share the same field of view; we use this arrangement for high-speed video capture (section 4.2) and for hybrid aperture imaging (section 6.2). (c) Cameras in a widely spaced configuration; also visible are cabinets with processing boards for each camera and the four host PCs needed to run the system.

    The advent of inexpensive digital image sensors and the ability to create photographs that combine information from a number of sensed images are changing the way we think about photography. In this paper, we describe a unique array of 100 custom video cameras that we have built, and we summarize our experiences using this array in a range of imaging applications. Our goal was to explore the capabilities of a system that would be inexpensive to produce in the future. With this in mind, we used simple cameras, lenses, and mountings, and we assumed that processing large numbers of images would eventually be easy and cheap. The applications we have explored include approximating a conventional single-center-of-projection video camera with high performance along one or more axes, such as resolution, dynamic range, frame rate, and/or large aperture, and using multiple cameras to approximate a video camera with a large synthetic aperture. This permits us to capture a video light field, to which we can apply spatiotemporal view interpolation algorithms in order to digitally simulate time dilation and camera motion. It also permits us to create video sequences using custom non-uniform synthetic apertures.
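
    A minimal sketch of the synthetic-aperture idea the abstract describes, under simplifying assumptions (rectified, fronto-parallel cameras with known per-camera baselines; all names are illustrative): shift each camera's image by its parallax at the chosen focal depth, then average, so objects at that depth align sharply while everything else blurs.

```python
import numpy as np

def synthetic_aperture_refocus(images, baselines_px, depth):
    """Shift-and-add refocusing across a camera array.

    images:       list of HxW arrays from rectified, fronto-parallel cameras
    baselines_px: per-camera (dx, dy) offsets from a reference camera,
                  in pixel-disparity units
    depth:        focal depth; disparity is proportional to baseline / depth
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (bx, by) in zip(images, baselines_px):
        # Parallax of a point at `depth` as seen from this camera.
        sx, sy = round(bx / depth), round(by / depth)
        # Integer shift aligns the chosen depth plane across all views.
        acc += np.roll(img, shift=(sy, sx), axis=(0, 1))
    return acc / len(images)
```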